
What Makes and Breaks Safety Fine-tuning? A Mechanistic Study

Neural Information Processing Systems

Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment. To better understand the underlying factors that make models safe via safety fine-tuning, we design a synthetic data generation framework that captures salient aspects of an unsafe input by modeling the interaction between the task the model is asked to perform (e.g., "design") versus the specific concepts the task is asked to be performed upon (e.g., a "cycle" vs. a "bomb").
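The task/concept factorization described above can be sketched as a toy generator. This is a minimal illustration, not the paper's actual framework: the prompt template, the function name, and the labeling rule (unsafe = any task paired with a harmful concept) are all assumptions for the sake of the example.

```python
import itertools

def synthetic_safety_data(tasks, safe_concepts, unsafe_concepts):
    """Toy sketch: cross every task with every concept and label the
    pair unsafe whenever the concept is drawn from the unsafe set."""
    data = []
    for task, concept in itertools.product(tasks, safe_concepts + unsafe_concepts):
        label = "unsafe" if concept in unsafe_concepts else "safe"
        data.append({"prompt": f"{task} a {concept}", "label": label})
    return data
```

Using the abstract's own example, `synthetic_safety_data(["design"], ["cycle"], ["bomb"])` yields one safe prompt ("design a cycle") and one unsafe prompt ("design a bomb").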




A Problem Formulation using L1 and L

Neural Information Processing Systems

Proof of Lemma 2. Let U be the dataset associated with ν. Proof of Lemma 3. First, we prove that the property holds for the root node. We wish to prove the property for some unexplored leaf after the iteration. This is trivial if the leaf ν is not expanded in that iteration. Suppose the leaf ν is expanded. Proof of Lemma 5. From Lemma 2, we note that Q Consider any path from the root to a leaf whose length is mK for some integer K > 0. We note that for each node ν and any of its children ν′ (Lemma 5).




The Distribution of Dependency Distance and Hierarchical Distance in Contemporary Written Japanese and Its Influencing Factors

Wang, Linxuan, Yu, Shuiyuan

arXiv.org Artificial Intelligence

To explore the relationship between dependency distance (DD) and hierarchical distance (HD) in Japanese, we compared the probability distributions of DD and HD with and without sentence length fixed, and analyzed the changes in mean dependency distance (MDD) and mean hierarchical distance (MHD) as sentence length increases, along with their correlation coefficient based on the Balanced Corpus of Contemporary Written Japanese. It was found that the valency of the predicates is the underlying factor behind the trade-off relation between MDD and MHD in Japanese. Native speakers of Japanese regulate the linear complexity and hierarchical complexity through the valency of the predicates, and the relative sizes of MDD and MHD depend on whether the threshold of valency has been reached. Apart from the cognitive load, the valency of the predicates also affects the probability distributions of DD and HD. The effect of the valency of the predicates on the distribution of HD is greater than on that of DD, which leads to differences in their probability distributions and causes the mean of MDD to be lower than that of MHD.
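Both distance measures compared above can be computed directly from a sentence's dependency tree. A minimal sketch, assuming 1-based head indices with 0 marking the root (as in CoNLL-style annotation), and taking a word's hierarchical distance to be its number of edges from the root — the function name and input convention are illustrative, not the study's code:

```python
def mean_distances(heads):
    """heads[i] is the 1-based index of the head of word i+1; 0 marks
    the root. Returns (MDD, MHD): the mean linear dependency distance
    and the mean hierarchical distance over all non-root words."""
    dds, hds = [], []
    for i, h in enumerate(heads, start=1):
        if h == 0:
            continue  # the root has no governor
        dds.append(abs(i - h))  # linear distance between word and head
        depth, node = 0, i
        while heads[node - 1] != 0:  # climb the head chain to the root
            node = heads[node - 1]
            depth += 1
        hds.append(depth)
    return sum(dds) / len(dds), sum(hds) / len(hds)
```

For the three-word chain `[2, 3, 0]` (word 1 depends on word 2, which depends on the root word 3), this gives MDD = 1.0 and MHD = 1.5.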


CAHS-Attack: CLIP-Aware Heuristic Search Attack Method for Stable Diffusion

Xia, Shuhan, Dai, Jing, Ouyang, Hui, Shang, Yadong, Zhao, Dongxiao, Li, Peipei

arXiv.org Artificial Intelligence

Abstract--Diffusion models exhibit notable fragility when faced with adversarial prompts, and strengthening attack capabilities is crucial for uncovering such vulnerabilities and building more robust generative systems. Existing works often rely on white-box access to model gradients or hand-crafted prompt engineering, which is infeasible in real-world deployments due to restricted access or poor attack effectiveness. In this paper, we propose CAHS-Attack, a CLIP-Aware Heuristic Search attack method. CAHS-Attack integrates Monte Carlo Tree Search (MCTS) to perform fine-grained suffix optimization, leveraging a constrained genetic algorithm to preselect high-potential adversarial prompts as root nodes, and retaining the most semantically disruptive outcome at each simulation rollout for efficient local search. Extensive experiments demonstrate that our method achieves state-of-the-art attack performance across both short and long prompts of varying semantics. Furthermore, we find that the fragility of SD models can be attributed to the inherent vulnerability of their CLIP-based text encoders, suggesting a fundamental security risk in current text-to-image pipelines. In recent years, advances in text-to-image generation have led to the emergence of powerful models such as Stable Diffusion (SD) [1], [2], FLUX [3], and MMaDA [4], enabling users to create high-quality images from natural language prompts.
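The paper's full pipeline combines genetic preselection with MCTS; as a much simpler stand-in, the general idea of black-box suffix optimization can be sketched as a greedy mutate-and-keep loop. Everything here is an assumption for illustration: the `score` callback is a hypothetical proxy for a CLIP-based semantic-disruption objective, and the loop is plain hill-climbing, not the paper's search method.

```python
import random

def heuristic_suffix_search(prompt, tokens, score, iters=200, seed=0):
    """Toy black-box suffix search: mutate one token of a short suffix
    at a time and keep the variant that maximizes `score(text)`."""
    rng = random.Random(seed)
    suffix = [rng.choice(tokens) for _ in range(3)]  # random 3-token suffix
    best = score(prompt + " " + " ".join(suffix))
    for _ in range(iters):
        cand = suffix[:]
        cand[rng.randrange(len(cand))] = rng.choice(tokens)  # one-token mutation
        s = score(prompt + " " + " ".join(cand))
        if s > best:  # retain the most disruptive outcome so far
            best, suffix = s, cand
    return " ".join(suffix), best
```

With a toy objective such as `lambda t: t.count("x")`, the returned suffix's score always matches the reported best by construction.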


Steering Opinion Dynamics in Signed Time-Varying Networks via External Control Input

Priya, Swati, Tripathy, Twinkle

arXiv.org Artificial Intelligence

Abstract-- This paper studies targeted opinion formation in multi-agent systems evolving over signed, time-varying directed graphs. The dynamics of each agent's state follow a Laplacian-based update rule driven by both cooperative and antagonistic interactions in the presence of exogenous factors. We formulate these exogenous factors as external control inputs and establish a suitable controller design methodology enabling collective opinion to converge to any desired steady-state configuration, superseding the natural emergent clustering or polarization behavior imposed by persistently structurally balanced influential root nodes. Our approach leverages upper Dini derivative analysis and Grönwall-type inequalities to establish exponential convergence for opinion magnitude towards the desired steady-state configuration on networks with uniform quasi-strong δ-connectivity. Finally, the theoretical results are validated through extensive numerical simulations.
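The underlying signed-Laplacian dynamics with an external input can be sketched with a plain Euler integration. This is a minimal sketch under simplifying assumptions — a static adjacency matrix and a constant input — whereas the paper treats time-varying digraphs and a specific controller design:

```python
import numpy as np

def simulate(A, u, x0, dt=0.01, steps=2000):
    """Euler integration of x_dot = -L x + u, with the signed Laplacian
    L = D - A, where D_ii = sum_j |A_ij|. Negative entries of A model
    antagonistic interactions; u is a constant exogenous input."""
    A = np.asarray(A, dtype=float)
    D = np.diag(np.abs(A).sum(axis=1))
    L = D - A
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (-L @ x + u)
    return x
```

With zero input, a cooperative two-agent pair drives opinions to consensus, while a purely antagonistic (structurally balanced) pair preserves the polarized state — the emergent behavior the paper's controller is designed to supersede.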


Leveraging the Power of AI and Social Interactions to Restore Trust in Public Polls

Abouelmagd, Amr Akmal, Hilal, Amr

arXiv.org Artificial Intelligence

The emergence of crowdsourced data has significantly reshaped social science, enabling extensive exploration of collective human actions, viewpoints, and societal dynamics. However, ensuring safe, fair, and reliable participation remains a persistent challenge. Traditional polling methods have seen a notable decline in engagement over recent decades, raising concerns about the credibility of collected data. Meanwhile, social and peer-to-peer networks have become increasingly widespread, but data from these platforms can suffer from credibility issues due to fraudulent or ineligible participation. In this paper, we explore how social interactions can help restore credibility in crowdsourced data collected over social networks. We present an empirical study to detect ineligible participation in a polling task through AI-based graph analysis of social interactions among an imperfect participant pool composed of honest and dishonest actors. Our approach focuses solely on the structure of social interaction graphs, without relying on the content being shared. We simulate different levels and types of dishonest behavior among participants who attempt to propagate the task within their social networks. We conduct experiments on real-world social network datasets, using different eligibility criteria and modeling diverse participation patterns. Although structural differences in social interaction graphs introduce some performance variability, our study achieves promising results in detecting ineligibility across diverse social and behavioral profiles, with accuracy exceeding 90% in some configurations.
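A toy illustration of the content-free, structure-only idea: extract simple per-node features from a directed interaction graph and feed them to a downstream classifier. The feature set here (in-degree, out-degree, reciprocity) is an assumption for illustration, not the study's actual model.

```python
from collections import defaultdict

def structural_features(edges):
    """Per-node structural features from a directed edge list (u, v):
    out-degree, in-degree, and reciprocity (fraction of a node's
    out-edges that are returned by the neighbor)."""
    out_nbrs, in_deg = defaultdict(set), defaultdict(int)
    nodes = set()
    for u, v in edges:
        out_nbrs[u].add(v)
        in_deg[v] += 1
        nodes.update((u, v))
    feats = {}
    for n in nodes:
        outs = out_nbrs[n]
        recip = (sum(1 for m in outs if n in out_nbrs[m]) / len(outs)) if outs else 0.0
        feats[n] = {"out": len(outs), "in": in_deg[n], "reciprocity": recip}
    return feats
```

For example, in the graph a→b, b→a, a→c, node "a" has out-degree 2 but only one reciprocated edge, giving reciprocity 0.5 — the kind of asymmetry such a detector might exploit.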